
Version Packages #184

Merged
merged 2 commits into main from changeset-release/main on Feb 7, 2025

Conversation

github-actions[bot] (Contributor) commented Feb 3, 2025

This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine, whenever you add more changesets to main, this PR will be updated.

Releases

@infinitered/[email protected]

Major Changes

  • 83b7e6e: Standardize API patterns and coordinate structures across ML Kit modules

    1. Separates model operations into three hooks with simpler APIs (see the sketch after this list):
      1. loading the models (useObjectDetectionModels, useImageLabelingModels)
      2. initializing the provider (useObjectDetectionProvider, useImageLabelingProvider)
      3. accessing models for inference (useObjectDetector, useImageLabeling)
    2. Implements consistent naming patterns to make the APIs more legible
      • Removes the "RNMLKit" prefix from non-native types
      • Uses specific names for hooks (useImageLabelingModels instead of useModels)
      • Model configs are now Configs instead of AssetRecords
    3. Moves base types into the core package to ensure consistency
    4. Fixes an issue with bounding box placement on portrait / rotated images on iOS
    5. Improves error handling and state management
    6. Updates documentation to match the new API
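    Taken together, a minimal sketch of the new three-hook flow looks roughly like this, using image labeling as the example. The hook, provider, and config names come from the migration notes below; the import path and the Screen component are assumptions for illustration, not part of these notes:

    import {
      useImageLabelingModels,
      useImageLabelingProvider,
      useImageLabeling,
      ImageLabelingConfig,
    } from "@infinitered/react-native-mlkit-image-labeling" // assumed package name

    const MODELS: ImageLabelingConfig = {
      nsfwDetector: {
        model: require("./assets/models/nsfw-detector.tflite"),
        options: { maxResultCount: 5, confidenceThreshold: 0.5 },
      },
    }

    function App() {
      // 1. load the models
      const models = useImageLabelingModels(MODELS)
      // 2. initialize the provider
      const { ImageLabelingModelProvider } = useImageLabelingProvider(models)

      return (
        <ImageLabelingModelProvider>
          <Screen /> {/* hypothetical screen component */}
        </ImageLabelingModelProvider>
      )
    }

    function Screen() {
      // 3. access the model for inference
      const detector = useImageLabeling("nsfwDetector")
      // call detector.classifyImage(imagePath) from an event handler or effect
      return null
    }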

    Breaking Changes

    Image Labeling

    • Renamed useModels to useImageLabelingModels for clarity
    • Renamed useImageLabeler to useImageLabeling
    • Introduced new useImageLabelingProvider hook for cleaner context management
    • Added type-safe configurations with ImageLabelingConfig
    • Renamed model context provider from ObjectDetectionModelContextProvider to ImageLabelingModelProvider

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ImageLabelingConfig = {
     nsfwDetector: {
       model: require("./assets/models/nsfw-detector.tflite"),
       options: {
         maxResultCount: 5,
         confidenceThreshold: 0.5,
       }
     },
    };
    
    function App() {
    - const { ObjectDetectionModelContextProvider } = useModels(MODELS)
    + const models = useImageLabelingModels(MODELS)
    + const { ImageLabelingModelProvider } = useImageLabelingProvider(models)
    
     return (
    -   <ObjectDetectionModelContextProvider>
    +   <ImageLabelingModelProvider>
         {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ImageLabelingModelProvider>
     )
    }

    Using the model

    - const model = useImageLabeler("nsfwDetector")
    + const detector = useImageLabeling("nsfwDetector")
    
    const labels = await detector.classifyImage(imagePath)

    Object Detection

    • useObjectDetectionModels now requires an assets parameter
    • useObjectDetector is now useObjectDetection
    • Introduced new useObjectDetectionProvider hook for context management
    • Renamed and standardized type definitions:
      • RNMLKitObjectDetectionObject → ObjectDetectionObject
      • RNMLKitObjectDetectorOptions → ObjectDetectorOptions
      • RNMLKitCustomObjectDetectorOptions → CustomObjectDetectorOptions
    • Added new types: ObjectDetectionModelInfo, ObjectDetectionConfig, ObjectDetectionModels
    • Moved model configuration to typed asset records
    • Default model now included in models type union

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ObjectDetectionConfig = {
      birdDetector: {
        model: require("./assets/models/bird-detector.tflite"),
        options: {
          shouldEnableClassification: false,
          shouldEnableMultipleObjects: false,
        }
      },
    };
    
    function App() {
    
    - const { ObjectDetectionModelContextProvider } = useObjectDetectionModels({
    -    assets: MODELS,
    -    loadDefaultModel: true,
    -    defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    -   })
    
    + const models = useObjectDetectionModels({
    +   assets: MODELS,
    +   loadDefaultModel: true,
    +   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    + })
    +
    + const { ObjectDetectionProvider } = useObjectDetectionProvider(models)
    
      return (
    -    <ObjectDetectionModelContextProvider>
    +    <ObjectDetectionProvider>
           {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ObjectDetectionProvider>
    )
    }
    

    Using the model

    - const { models: { birdDetector } } = useObjectDetectionModels({
    -   assets: MODELS,
    -   loadDefaultModel: true,
    -   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    - })
    -
    + const birdDetector = useObjectDetection("birdDetector")
    
    const objects = birdDetector.detectObjects(imagePath)

    Face Detection

    • Changed option naming conventions to match ML Kit SDK patterns:
      • detectLandmarks → landmarkMode
      • runClassifications → classificationMode
    • Changed default performanceMode from accurate to fast
    • Renamed hook from useFaceDetector to useFaceDetection
    • Renamed context provider from RNMLKitFaceDetectionContextProvider to FaceDetectionProvider
    • Added comprehensive error handling
    • Added new state management with FaceDetectionState type

    Here's how to update your app:

    Using the detector

    const options = {
    -  detectLandmarks: true,
    +  landmarkMode: true,
    -  runClassifications: true,
    +  classificationMode: true,
    }
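
    The default performanceMode also changed from accurate to fast. If you relied on the old default, you should be able to keep the previous behavior by setting the option explicitly (a sketch; only performanceMode, landmarkMode, and classificationMode are named in these notes):

    const options = {
      landmarkMode: true,
      classificationMode: true,
      performanceMode: "accurate", // "fast" is now the default per these notes
    }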

    Using the provider

    - import { RNMLKitFaceDetectionContextProvider } from "@infinitered/react-native-mlkit-face-detection"
    + import { FaceDetectionProvider } from "@infinitered/react-native-mlkit-face-detection"
    
    function App() {
     return (
    -   <RNMLKitFaceDetectionContextProvider>
    +   <FaceDetectionProvider>
         {/* Rest of your app */}
    -   </RNMLKitFaceDetectionContextProvider>
    +   </FaceDetectionProvider>
     )
    }

    Using the hooks

    - const detector = useFaceDetector()
    + const detector = useFaceDetection()
    
    // useFacesInPhoto remains unchanged
    const { faces, status, error } = useFacesInPhoto(imageUri)
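
    Since this release adds error handling and a FaceDetectionState type, a hedged sketch of consuming that state might look like the following (the exact status values aren't listed in these notes, so the branches are purely illustrative):

    if (error) {
      console.warn("Face detection failed:", error, status)
    } else {
      // faces is expected to hold the detected faces once detection settles
      faces?.forEach((face) => console.log(face))
    }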

    Core Module

    • Introduced shared TypeScript interfaces:
      • ModelInfo<T>
      • AssetRecord<T>
    • Standardized frame coordinate structure
    • Implemented consistent type patterns
  • b668ab0: Upgrade to expo 52

Minor Changes

  • b668ab0: align podspec platform requirements with expo version
  • 213f085: fix: sets static builds in core podspec to prevent dependency conflicts

@infinitered/[email protected]

@infinitered/[email protected]

Minor Changes

  • b668ab0: align podspec platform requirements with expo version

@infinitered/[email protected]

Major Changes

  • 83b7e6e: Standardize API patterns and coordinate structures across ML Kit modules

    1. Separates model operations into three hooks with simpler APIs:
      1. loading the models (useObjectDetectionModels, useImageLabelingModels)
      2. initializing the provider (useObjectDetectionProvider, useImageLabelingProvider)
      3. accessing models for inference (useObjectDetector, useImageLabeling)
    2. Implements consistent naming patterns to make the APIs more legible
      • Removes the "RNMLKit" prefix from non-native types
      • Uses specific names for hooks (useImageLabelingModels instead of useModels)
      • Model configs are now Configs instead of AssetRecords
    3. Moves base types into the core package to ensure consistency
    4. Fixes an issue with bounding box placement on portrait / rotated images on iOS
    5. Improves error handling and state management
    6. Updates documentation to match the new API

    Breaking Changes

    Image Labeling

    • Renamed useModels to useImageLabelingModels for clarity
    • Renamed useImageLabeler to useImageLabeling
    • Introduced new useImageLabelingProvider hook for cleaner context management
    • Added type-safe configurations with ImageLabelingConfig
    • Renamed model context provider from ObjectDetectionModelContextProvider to ImageLabelingModelProvider

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ImageLabelingConfig = {
     nsfwDetector: {
       model: require("./assets/models/nsfw-detector.tflite"),
       options: {
         maxResultCount: 5,
         confidenceThreshold: 0.5,
       }
     },
    };
    
    function App() {
    - const { ObjectDetectionModelContextProvider } = useModels(MODELS)
    + const models = useImageLabelingModels(MODELS)
    + const { ImageLabelingModelProvider } = useImageLabelingProvider(models)
    
     return (
    -   <ObjectDetectionModelContextProvider>
    +   <ImageLabelingModelProvider>
         {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ImageLabelingModelProvider>
     )
    }

    Using the model

    - const model = useImageLabeler("nsfwDetector")
    + const detector = useImageLabeling("nsfwDetector")
    
    const labels = await detector.classifyImage(imagePath)

    Object Detection

    • useObjectDetectionModels now requires an assets parameter
    • useObjectDetector is now useObjectDetection
    • Introduced new useObjectDetectionProvider hook for context management
    • Renamed and standardized type definitions:
      • RNMLKitObjectDetectionObject → ObjectDetectionObject
      • RNMLKitObjectDetectorOptions → ObjectDetectorOptions
      • RNMLKitCustomObjectDetectorOptions → CustomObjectDetectorOptions
    • Added new types: ObjectDetectionModelInfo, ObjectDetectionConfig, ObjectDetectionModels
    • Moved model configuration to typed asset records
    • Default model now included in models type union

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ObjectDetectionConfig = {
      birdDetector: {
        model: require("./assets/models/bird-detector.tflite"),
        options: {
          shouldEnableClassification: false,
          shouldEnableMultipleObjects: false,
        }
      },
    };
    
    function App() {
    
    - const { ObjectDetectionModelContextProvider } = useObjectDetectionModels({
    -    assets: MODELS,
    -    loadDefaultModel: true,
    -    defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    -   })
    
    + const models = useObjectDetectionModels({
    +   assets: MODELS,
    +   loadDefaultModel: true,
    +   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    + })
    +
    + const { ObjectDetectionProvider } = useObjectDetectionProvider(models)
    
      return (
    -    <ObjectDetectionModelContextProvider>
    +    <ObjectDetectionProvider>
           {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ObjectDetectionProvider>
    )
    }
    

    Using the model

    - const { models: { birdDetector } } = useObjectDetectionModels({
    -   assets: MODELS,
    -   loadDefaultModel: true,
    -   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    - })
    -
    + const birdDetector = useObjectDetection("birdDetector")
    
    const objects = birdDetector.detectObjects(imagePath)

    Face Detection

    • Changed option naming conventions to match ML Kit SDK patterns:
      • detectLandmarks → landmarkMode
      • runClassifications → classificationMode
    • Changed default performanceMode from accurate to fast
    • Renamed hook from useFaceDetector to useFaceDetection
    • Renamed context provider from RNMLKitFaceDetectionContextProvider to FaceDetectionProvider
    • Added comprehensive error handling
    • Added new state management with FaceDetectionState type

    Here's how to update your app:

    Using the detector

    const options = {
    -  detectLandmarks: true,
    +  landmarkMode: true,
    -  runClassifications: true,
    +  classificationMode: true,
    }

    Using the provider

    - import { RNMLKitFaceDetectionContextProvider } from "@infinitered/react-native-mlkit-face-detection"
    + import { FaceDetectionProvider } from "@infinitered/react-native-mlkit-face-detection"
    
    function App() {
     return (
    -   <RNMLKitFaceDetectionContextProvider>
    +   <FaceDetectionProvider>
         {/* Rest of your app */}
    -   </RNMLKitFaceDetectionContextProvider>
    +   </FaceDetectionProvider>
     )
    }

    Using the hooks

    - const detector = useFaceDetector()
    + const detector = useFaceDetection()
    
    // useFacesInPhoto remains unchanged
    const { faces, status, error } = useFacesInPhoto(imageUri)

    Core Module

    • Introduced shared TypeScript interfaces:
      • ModelInfo<T>
      • AssetRecord<T>
    • Standardized frame coordinate structure
    • Implemented consistent type patterns
  • b668ab0: Upgrade to expo 52

Minor Changes

  • b668ab0: align podspec platform requirements with expo version

@infinitered/[email protected]

Major Changes

  • 83b7e6e: Standardize API patterns and coordinate structures across ML Kit modules

    1. Separates model operations into three hooks with simpler APIs:
      1. loading the models (useObjectDetectionModels, useImageLabelingModels)
      2. initializing the provider (useObjectDetectionProvider, useImageLabelingProvider)
      3. accessing models for inference (useObjectDetector, useImageLabeling)
    2. Implements consistent naming patterns to make the APIs more legible
      • Removes the "RNMLKit" prefix from non-native types
      • Uses specific names for hooks (useImageLabelingModels instead of useModels)
      • Model configs are now Configs instead of AssetRecords
    3. Moves base types into the core package to ensure consistency
    4. Fixes an issue with bounding box placement on portrait / rotated images on iOS
    5. Improves error handling and state management
    6. Updates documentation to match the new API

    Breaking Changes

    Image Labeling

    • Renamed useModels to useImageLabelingModels for clarity
    • Renamed useImageLabeler to useImageLabeling
    • Introduced new useImageLabelingProvider hook for cleaner context management
    • Added type-safe configurations with ImageLabelingConfig
    • Renamed model context provider from ObjectDetectionModelContextProvider to ImageLabelingModelProvider

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ImageLabelingConfig = {
     nsfwDetector: {
       model: require("./assets/models/nsfw-detector.tflite"),
       options: {
         maxResultCount: 5,
         confidenceThreshold: 0.5,
       }
     },
    };
    
    function App() {
    - const { ObjectDetectionModelContextProvider } = useModels(MODELS)
    + const models = useImageLabelingModels(MODELS)
    + const { ImageLabelingModelProvider } = useImageLabelingProvider(models)
    
     return (
    -   <ObjectDetectionModelContextProvider>
    +   <ImageLabelingModelProvider>
         {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ImageLabelingModelProvider>
     )
    }

    Using the model

    - const model = useImageLabeler("nsfwDetector")
    + const detector = useImageLabeling("nsfwDetector")
    
    const labels = await detector.classifyImage(imagePath)

    Object Detection

    • useObjectDetectionModels now requires an assets parameter
    • useObjectDetector is now useObjectDetection
    • Introduced new useObjectDetectionProvider hook for context management
    • Renamed and standardized type definitions:
      • RNMLKitObjectDetectionObject → ObjectDetectionObject
      • RNMLKitObjectDetectorOptions → ObjectDetectorOptions
      • RNMLKitCustomObjectDetectorOptions → CustomObjectDetectorOptions
    • Added new types: ObjectDetectionModelInfo, ObjectDetectionConfig, ObjectDetectionModels
    • Moved model configuration to typed asset records
    • Default model now included in models type union

    Here's how to update your app:

    Fetching the provider

    - const MODELS: AssetRecord = {
    + const MODELS: ObjectDetectionConfig = {
      birdDetector: {
        model: require("./assets/models/bird-detector.tflite"),
        options: {
          shouldEnableClassification: false,
          shouldEnableMultipleObjects: false,
        }
      },
    };
    
    function App() {
    
    - const { ObjectDetectionModelContextProvider } = useObjectDetectionModels({
    -    assets: MODELS,
    -    loadDefaultModel: true,
    -    defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    -   })
    
    + const models = useObjectDetectionModels({
    +   assets: MODELS,
    +   loadDefaultModel: true,
    +   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    + })
    +
    + const { ObjectDetectionProvider } = useObjectDetectionProvider(models)
    
      return (
    -    <ObjectDetectionModelContextProvider>
    +    <ObjectDetectionProvider>
           {/* Rest of your app */}
    -   </ObjectDetectionModelContextProvider>
    +   </ObjectDetectionProvider>
    )
    }
    

    Using the model

    - const { models: { birdDetector } } = useObjectDetectionModels({
    -   assets: MODELS,
    -   loadDefaultModel: true,
    -   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
    - })
    -
    + const birdDetector = useObjectDetection("birdDetector")
    
    const objects = birdDetector.detectObjects(imagePath)

    Face Detection

    • Changed option naming conventions to match ML Kit SDK patterns:
      • detectLandmarks → landmarkMode
      • runClassifications → classificationMode
    • Changed default performanceMode from accurate to fast
    • Renamed hook from useFaceDetector to useFaceDetection
    • Renamed context provider from RNMLKitFaceDetectionContextProvider to FaceDetectionProvider
    • Added comprehensive error handling
    • Added new state management with FaceDetectionState type

    Here's how to update your app:

    Using the detector

    const options = {
    -  detectLandmarks: true,
    +  landmarkMode: true,
    -  runClassifications: true,
    +  classificationMode: true,
    }

    Using the provider

    - import { RNMLKitFaceDetectionContextProvider } from "@infinitered/react-native-mlkit-face-detection"
    + import { FaceDetectionProvider } from "@infinitered/react-native-mlkit-face-detection"
    
    function App() {
     return (
    -   <RNMLKitFaceDetectionContextProvider>
    +   <FaceDetectionProvider>
         {/* Rest of your app */}
    -   </RNMLKitFaceDetectionContextProvider>
    +   </FaceDetectionProvider>
     )
    }

    Using the hooks

    - const detector = useFaceDetector()
    + const detector = useFaceDetection()
    
    // useFacesInPhoto remains unchanged
    const { faces, status, error } = useFacesInPhoto(imageUri)

    Core Module

    • Introduced shared TypeScript interfaces:
      • ModelInfo<T>
      • AssetRecord<T>
    • Standardized frame coordinate structure
    • Implemented consistent type patterns
  • b668ab0: Upgrade to expo 52

Minor Changes

  • b668ab0: align podspec platform requirements with expo version

[email protected]

@infinitered/[email protected]

@infinitered/[email protected]

github-actions bot force-pushed the changeset-release/main branch 22 times, most recently from 0618006 to ca713ab on February 7, 2025 at 16:40
github-actions bot force-pushed the changeset-release/main branch from ca713ab to 21bbaa6 on February 7, 2025 at 21:03
@trevor-coleman merged commit 3b07ad6 into main on Feb 7, 2025
2 checks passed
@jamonholmgren deleted the changeset-release/main branch on February 8, 2025 at 00:17